Review for NeurIPS paper: GPU-Accelerated Primal Learning for Extremely Fast Large-Scale Classification
Summary and Contributions: The Trust Region Newton algorithm (TRON) is among the most efficient solvers for L2-regularized primal problems such as logistic regression (LR) and linear SVM training. Because of the complex and sequential nature of the algorithm, its past performance gains have largely come from shared-memory multi-core systems. This paper demonstrates significant speedups in the training time of the TRON solver over multithreaded implementations by applying GPU-specific optimization principles. The authors apply targeted optimizations to sparse-representation problems (LR training) and dense-representation problems (SVM training) to achieve significant training-time speedups on GPUs. Specifically, for datasets with sparse feature representations and the LR loss function, the authors prescribe optimizations that minimize the sequential dependence of CPU and GPU execution on each other by assuming that all conditional branches evaluate in favor of the high-compute operations, which can then be run pre-emptively on the GPU.
- Information Technology > Hardware (1.00)
- Information Technology > Graphics (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.37)
A Brain-inspired Computational Model for Human-like Concept Learning
Concept learning is a fundamental aspect of human cognition and plays a critical role in mental processes such as categorization, reasoning, memory, and decision-making. Researchers across various disciplines have shown consistent interest in how individuals acquire concepts. To elucidate the mechanisms involved in human concept learning, this study examines findings from computational neuroscience and cognitive psychology. These findings indicate that the brain's representation of concepts relies on two essential components: multisensory representation and text-derived representation. These two types of representations are coordinated by a semantic control system, ultimately leading to the acquisition of concepts. Drawing inspiration from this mechanism, the study develops a human-like computational model for concept learning based on spiking neural networks. By effectively addressing the challenges posed by the diverse sources and imbalanced dimensionality of the two forms of concept representation, the study attains human-like concept representations. Tests involving similar concepts demonstrate that the model, which mimics the way humans learn concepts, yields representations that closely align with human cognition.
- Asia > China > Beijing > Beijing (0.05)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)